When the battle is over, everyone is an excellent commander, says a Czech proverb. Object-Oriented Modeling and Design was published almost five years ago (1991), and much of the OMT methodology has changed since then. Nevertheless, people still use the book as a basic textbook of OO methodology. What can they still learn from it, and what should they be critical of?
(see James Rumbaugh in the picture below)
All the authors were computer scientists at General Electric Corporate Research and Development, Schenectady, New York (US). Nothing more is known to me about Michael Blaha, William Premerlani and William Lorensen. James Rumbaugh recently became a fellow at Rational Software Corporation, 2800 San Tomas Expy., Santa Clara, CA. Frederick Eddy has worked at GE for 20 years. Both James Rumbaugh and Frederick Eddy are popular speakers on OMT in the US and Europe (see RSE 2/94).
Rumbaugh, J., Blaha, M., Premerlani, W., Eddy, F., Lorensen, W.: Object-Oriented Modeling and Design. Prentice-Hall, Englewood Cliffs, NJ, 1991
The book ends with a glossary of terms, answers to selected exercises, and an index.
System requirements express various aspects of a common purpose of the system. The system is analyzed and its parts are recognized only in the context of that purpose. No subsystem, object class or object instance makes sense independently of the system purpose.
The purpose of the system is the first thing the analyst should be interested in. It can be expressed in the problem statement, or it can be synthesized from use cases (as in OOSE). The purpose of the system should be agreed upon with the requestor, to establish a correct common understanding of it. That is the first step toward developing a valid system that satisfies the requestor's needs and expectations.
OMT doesn't mention the purpose of the system; the purpose has to be grasped intuitively. Correct understanding of the purpose is neglected, and subsystems, classes and object instances are not checked against it.
It's probably true that
On the contrary, a power station, for example, can't be programmed this way (see the problems that require the software engineer to be an expert in them as well).
The question remains, though: what is the real world, what are real-world concepts, real objects, etc.? I have asked this question several times, but I have not yet received an answer. We could discuss vague terms for as long as we wished, but we cannot base engineering on vague terms. Since I still cannot take "real-world modeling" seriously, I hope a bit of philosophy might help in understanding the problem. Let's try to grasp the distinction between the objective and the subjective: what objective reality means, what truth means, and what the ways of cognition are. Then let's discuss in more detail how a system can be specified...
Let's remember: specification concerns what is specific (and in the special case of system specification, it concerns what is specific to the system being specified).
Specification does not concern
System specification is always a specification of the external behavior of the system, nothing else.
OMT doesn't address fixing the system boundaries. It states no criteria that make it possible to differentiate objects inside the system from objects outside it. If the system boundary remains unknown, the behavior of the system toward the external world cannot be determined. Consequently, the system specification cannot be obtained (i.e., requirements cannot be stated).
What good is a methodology of software engineering when the software engineer doesn't know what problem is to be solved? What kind of engineering solves unformulated problems? How can the software engineer find out whether the results of his work are valid (i.e., whether he has done what was to be done)?
All those problems can be solved in OMT, and some of them already have been. This is not a crucial objection to how OMT specifies systems. Rather, the distinction between OMT or OOA on one side and OBA or OOSE on the other becomes apparent. This distinction manifests the deep discord between two major approaches to the process of cognition: mysticism and behaviorism.
The topics follow:
The mystics believe that truths are absolute and exist only objectively in the world. The truths are considered to be one aspect of some spiritual being (e.g. God). Truths are basically hidden, but they can be recognized. How? Unity with the spiritual being (and thereby with the truths themselves) must be reached first (e.g. by prayer or meditation). This way, the human being can participate in the truths, and the truths become revealed to her or him. She or he can then even tell the recognized truths to other persons - but only to persons in spiritual unity with the truths. Whoever doesn't want to understand cannot understand.
So, truth can't be recognized unless
American behaviorism (which can be considered a more recent development stage of empiricism, and of the Marxist gnoseology well known to me) admits that absolute, hidden truths exist objectively. As regards cognition, however, only relative, partial and subjective truths can be learned by individuals at any moment. Cognition is a never-ending, stepwise process of verifying old knowledge, refuting it, and grasping an ever truer understanding of objective reality. Every particular piece of knowledge is acquired by experiencing the behavior of objective reality. The question remains, though: how do pieces of knowledge constitute understanding?
It sometimes happens that newly acquired knowledge contradicts other knowledge. (Thus, correct thinking is assumed, but the rules of correct thinking are not explained in detail.) Behaviorists base their judgment on experience: the truth should be verified practically and the knowledge corrected. This way, understanding gets one step closer to the absolute truth.
So, truth is approved by neither
but by
Rationalists assume another cognitive process: thinking. They believe that correct thinking makes the hidden truths apparent to them, and they assume they know the rules of correct thinking. The question remains, though, whether those rules are correct. Conversely, a logical incorrectness reveals either an incorrectness in the recognized matter or an incorrectness in some logical rule. So, verifying logical correctness carefully, precisely and systematically allows
Making judgments among the various approaches to cognition is definitely not a matter of software engineering. Nor is observing people tossing about between the unconscious existence of mere matter and the conscious participation in absolute truths, however amusing it may be. None of the approaches should be ignored in software engineering: each contains a moral that benefits software engineering if applied properly.
OOA, OMT and the similar mystical methodologies assume that some absolute, invariant truths are revealed to experts. They don't consider any cognitive process at all. They completely ignore what I (as a Marxist with a state examination) am scientifically convinced of: every recognized truth is relative. According to the software engineering mystics, the whole of software development is just a kind of periodic reengineering of revealed systems. Thus, the applicability of the mystical methodologies is limited to cases where the software engineer may rely on secondary cognition mediated by an expert. Let's realize that not every software system may be specified this way.
Even an expert once had to create his system of concepts. He might have done so indirectly, again (he might have learned from another expert), but nonetheless somebody at some time had to experience the objective reality on his own. And there was no other source of information available to him than the external effects of the part of objective reality under exploration (i.e., its behavior). What's more, his own experience with the behavior of objective reality also served as verification that the acquired knowledge was relatively true. So much for Marxist gnoseology and American behaviorism.
Mystics assume that the cognition of the software engineer is mediated by an expert (which may apply to the expert's own cognition as well). What about the validity and truthfulness of the mediated knowledge? The mystics simply say: "everybody does it this way, so it must be correct." This sentence says something about the validity of the recognized matter, but it says nothing about how far the recognized matter is relatively true. Thus, it is a relevant way of validation, but not of verification. Accepting mediated experience leads to greater uncertainty about the degree of cognition than experiencing the whole process of cognition oneself. The process of cognition involves: gathering knowledge, searching for appropriate relationships among notions, constructing the system (as logically correct as possible), and verifying it.
Conversely, OBA, OOSE and the other behavioral methodologies employ only the technology of immediate cognition. They ignore the technology of indirect, mediated cognition (learning from experts). Yet software behaviorists have no reason to disapprove of expert knowledge: such knowledge can be obtained neither easily nor quickly. Behavior analysis may be of no use when facing a complex problem on a short deadline (which could be met otherwise).
Exploring a nuclear power station with the methods of behavior analysis can cause an immediate environmental catastrophe. Modeling a nuclear power station according to information mediated by an expert is of no use except for causing an environmental catastrophe... Whoever goes to program a steam power station must be a software engineer as well as an expert in physics, in measurement and control, in mechanical and electrical engineering, etc.
Let's consider an example: how the opening and closing of valves progresses in time. The progress of opening or closing should depend on the progress of pressure in the pipes before and behind the valve, and on the construction of the pipes. Leveling the pressure should progress in such a way that it doesn't take too much time and, on the other hand, that the pipes do not get damaged. It is probably impossible to find a material or a way of construction such that the pipes can endure anything, especially when the pressures and temperatures, as well as the possible ranges of their changes, are like those in a steam power station.
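The trade-off can be sketched as a rate-limited controller: the valve opens faster as the pressure difference across it shrinks, so leveling is as quick as the pipes can tolerate. This is only a toy illustration of the reasoning above; the function, constants and limit value are invented for the example, not taken from any real plant or from OMT.

```python
def step_valve(opening, p_before, p_after, dt, k=0.05, max_dp=2.0e6):
    """Advance the valve opening (0.0 closed .. 1.0 open) by one time step.

    The opening rate grows as the pressure difference across the valve
    falls below the assumed damage limit max_dp: a large difference
    forces slow opening, a small one allows fast opening.
    All constants are invented for illustration.
    """
    dp = abs(p_before - p_after)            # pressure difference across the valve [Pa]
    headroom = max(0.0, 1.0 - dp / max_dp)  # 0 near the damage limit, 1 when leveled
    rate = k * (0.1 + 0.9 * headroom)       # never fully stalls, but slows under stress
    return min(1.0, opening + rate * dt)

# The valve opens cautiously while the difference is large,
# then speeds up as the pressures level out.
opening = 0.0
p_before, p_after = 5.0e6, 3.5e6
for _ in range(10):
    opening = step_valve(opening, p_before, p_after, dt=1.0)
```

The point of the sketch is only that the control law couples the valve's lifecycle to physical quantities the software engineer must understand; choosing k and max_dp is exactly the expert knowledge the review is talking about.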
Some economists claim even of accounting that the programmer must also be an expert in accounting (with, they say, at least two years of experience). I don't believe them (and I know why).
The problem domain can be modeled according to the ideas of an expert. Such a model can be used as a system specification: it specifies well how the system behaves externally. The internal structure of the model, however, probably doesn't meet the requirements of good design, since the expert is an expert in the problem domain, not in system design. Never mind: the specification of external behavior determines a class of all equivalent designs.
A system designer (or software engineer) is then to construct another, well-designed model of the problem domain (e.g., the problem domain component in OOA & OOD, or the ideal model in OOSE). The specification then proves the validity of the construction.
Unfortunately, the mystical methodologies don't draw the conclusion that specification and construction are two different things. According to them, software engineers may use the specification directly as a component when they construct the system, and may neglect good design of that component. This way, the quality of the whole system may be spoiled.
The problem, though, is ensuring that the new, well-designed system is equivalent to the original specification. Validating after the construction is done means validating late. A technology is needed that allows constructing the equivalent system directly, so that following the procedure itself ensures equivalence. That means the designer must make each particular decision with the information available at that particular moment; he can't base his decisions on information that will only be available after the design is done.
OMT doesn't take lifecycles and their synchronization into account when particular objects in an object model are distinguished. All object modeling or entity modeling approaches that ignore object lifecycles and their correct synchronization are necessarily intuitive. They are based on vague terms like "dependency". Everybody who models data or normalizes relational databases may differ from everybody else in how he understands dependency, and yet they all may feel they agree. What kind of methodology is it where what one holds to be a dependency the other may not, and both may be right?
Let's have a look at OOSE, for example. OOSE doesn't speak about dependencies; it operates with well-defined terms like interaction (and we can go on: communication, synchronization, communication protocol, object lifecycle, etc.). This way, clear criteria are stated that allow
Objects are revealed only by their behavior. Abstracting from object behavior makes recognizing objects impossible.
OMT recommends functional modeling to capture the functional aspect of the system. The functional model consists of data transformation processes, data flows and data stores. Both the stores and the flows are wholes with their own identity, memory and behavior. Processes are also wholes with identity, and they too have behavior. There are two kinds of processes: pure functions and sequential processes. The behavior of a pure function depends only on its input data. The behavior of a sequential process depends not only on the input data but on the previous history, too (which means, of course, that the process must remember the history, e.g. by staying in some of its internal states). Pure functions may be considered a special case of sequential processes (they always remain in the same state). Although pure functions don't need memory, sequential processes (as the more general concept) definitely may have memory. That implies: processes, flows and stores are objects.
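The distinction between the two kinds of processes can be made concrete in a few lines of code. In this sketch (the names and the Python rendering are mine, not OMT notation), the pure function always answers the same for the same input, while the sequential process remembers its history in internal state:

```python
def discount(price):
    """Pure function: the output depends only on the input."""
    return price * 0.9

class RunningTotal:
    """Sequential process: the output depends on the input AND the
    history, so the process must remember (here, in self.total)."""
    def __init__(self):
        self.total = 0.0      # internal state = memory of past inputs
    def consume(self, amount):
        self.total += amount
        return self.total

# The pure function gives the same answer for the same input;
# the sequential process does not.
assert discount(100) == discount(100)
proc = RunningTotal()
first, second = proc.consume(10), proc.consume(10)
assert first != second        # same input, different output: history matters
```

A pure function is then exactly the degenerate case: a RunningTotal whose state never changes.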
The functional model duplicates both the object model and the dynamic model. The same thing that can be drawn as a function in the functional model can be drawn as an object class in the object model; the same applies to data flows and data stores. And what about the dynamic model? The dynamic model differs from the object model in that it expresses the sequencing of actions, which is not clearly expressed in the object model. Nonetheless, the functional model expresses sequencing of actions as well: data must be produced before they are consumed. Consequently, every data flow (which connects a data-producing process with a data-consuming process) implicitly expresses the sequencing of those processes. Transformation from the functional model to the object model and back is possible, as is transformation to the dynamic model and back. The functional model is rather another way of drawing than another aspect of the same system.
Although transformations among models are possible, they make little sense if the various models are constructed independently. It seems difficult to find out whether a functional model is equivalent to an object model or a dynamic model. Why? The various models may express various decompositions of the same system or subsystem, so there may not exist any one-to-one mapping between models, and it is probably hard to prove whether an appropriate structural transformation exists. Thus, the consistency of the system can hardly be maintained. What's more, the functional model allows creating loopbacks, i.e., introducing sequential behavior and memory using mere data flows (no stores are necessary - see figure). That kind of memory can hardly be mapped to the object model.
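The loopback point deserves a demonstration: feed a pure function's output back into one of its own inputs on the next step, and memory appears with no explicit data store. A minimal sketch (the wiring function is mine; it simulates the feedback flow of a data flow diagram step by step):

```python
def add(x, y):
    """A pure function: no state of its own."""
    return x + y

def run_with_loopback(inputs):
    """Wire add's output back to one of its inputs on the next step.
    The feedback flow itself, not any data store, is what remembers."""
    fed_back = 0                     # the value travelling on the feedback flow
    outputs = []
    for x in inputs:
        fed_back = add(x, fed_back)  # output re-enters as input next step
        outputs.append(fed_back)
    return outputs

# A network of pure functions behaves sequentially: it accumulates.
assert run_with_loopback([1, 2, 3]) == [1, 3, 6]
```

The composite behaves exactly like the stateful RunningTotal process, yet the diagram would show only a pure function and flows; this is the hidden memory that resists mapping to an object model.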
I don't claim that the functional model makes no sense. I claim it is a duplicate (or redundant), so it makes the same sense as the object or dynamic model. I see the problem in the redundancy and resulting inconsistency of the system rather than in the functional model itself. I see as a problem what OMT admits: I may construct several mutually independent models of the same system, yet I cannot gain any knowledge about their equivalence. If I make a change in one of those models, I will not be able to make the corresponding change in the other models. This way, I may introduce logical inconsistencies into the system being developed. I suppose a methodology should prevent constructing logically inconsistent systems, not promote it.
We can see a good solution to a similar problem: the consistency of the object and dynamic models seems to be well ensured. The dynamic model might become redundant to both the functional and the object model; however, its role is clearly defined: it should model the lifecycles of particular objects. By contrast, the object model captures the structure of the object system (except for command structures) and abstracts from lifecycles and from the sequencing of actions. Smalltalk believers may dislike that idea; they would perhaps like to emphasize that even the command structure is a structure of objects. As for me, I don't need to emphasize what János von Neumann discovered some fifty years ago: command structures and data structures are essentially the same, so they can be handled in the same manner. I would rather emphasize how command structures are special (not every data structure is appropriate for holding commands). That approach allows me to better utilize the results of at least seventy years of computer science and the essential discoveries made by Alan Turing and Alonzo Church. This is not a crucial conflict, however; the matter is nothing more than what one likes more or less.
Possible solutions now appear obvious: the functional model can be omitted entirely (it is redundant), or some clear boundary should be found between the functional model and the object model, and between the functional model and the dynamic model (like the boundary that exists between the dynamic model and the object model). The latter proposal avoids the redundancy. Other solutions may probably be found, too.
It is not correct that an attribute is a pure value. An attribute can change its value, which is a sign of identity. James Rumbaugh, in his series [2] in JOOP, particularly in vol. 8, no. 1 (1995), distinguishes between attribute slots and attribute values. See also the particular remarks to pg. 23 and to pg. 162.
In fact, an attribute slot is the same kind of object as any other: a whole with its own identity that behaves and remembers. Nevertheless, the notion of attribute concerns the association between the holder object of the attribute and the attribute slot, rather than the attribute slot itself. The attribute association is a special kind of container-contents association.
The attribute holder and slot use the association to communicate and synchronize with each other. They should follow some rules of communication - a communication protocol. That protocol is part of their lifecycles (and also of the lifecycle of the attribute association). There are some typical constraints on attribute synchronization, e.g. both the holder and the slot must be created at once and destroyed at once. An attribute is usually used privately inside its holder (so external messages are not rerouted to the attribute by its holder).
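In code, the point reads roughly as follows: the slot is an object with identity whose value may change, the holder creates it together with itself (one of the typical synchronization constraints), and the outside world talks only to the holder. The class names and this particular rendering are mine, not Rumbaugh's:

```python
class Slot:
    """An attribute slot: an object with identity, memory and behavior.
    Its value may change; its identity does not."""
    def __init__(self, value):
        self._value = value
    def get(self):
        return self._value
    def set(self, value):          # a changing value is a sign of identity
        self._value = value

class Person:
    """The holder: creates its slot when it itself is created,
    and uses the slot privately."""
    def __init__(self, name):
        self._name = Slot(name)    # holder and slot are born together
    def rename(self, name):
        self._name.set(name)       # external messages go via the holder
    @property
    def name(self):
        return self._name.get()

p = Person("Alice")
slot_before = p._name
p.rename("Alicia")
assert p.name == "Alicia"
assert p._name is slot_before      # same slot object: value changed, identity kept
```

The identity check at the end is the whole argument in miniature: the attribute is not a pure value, because something persists while the value changes.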
Objects don't live alone; they are connected together. OMT mentions associations between two or more objects, and links (instances of associations). Complex whole-part structures are called aggregations. OOSE additionally introduces communication associations as a special kind of association, and a similar kind of communication connection can be found in OOA. Although communication is just the purpose for which the association is established (so it is a regular association), the mere mention of communication implies a temporal point of view.
Every link (in terms of OMT) is an object and has its own memory as well as behavior and a lifecycle. The parties (all objects connected by the link) must be known when the link is established and remembered for the whole life of the link. The behavior of a link is constituted by the communication itself. The behavior of a link is defined in the respective association; that definition can be called a communication protocol.
Unfortunately, neither OMT nor any other methodology I know explains how the communication protocol should be designed correctly. Simplifying the methodology by abstracting from communication protocols is dangerous: although one designs the system according to the methodology, the objects may still be unable to communicate correctly. Incorrect communication may fail to transfer the desired information, may produce nonsense data, or may cause deadlocks.
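To show what a communication protocol of a link can mean in practice, here is a sketch of a link object that enforces a simple request-reply protocol as a small state machine and rejects out-of-order messages. The protocol, the class and its names are invented for illustration; OMT prescribes no such mechanism:

```python
class ProtocolError(Exception):
    pass

class RequestReplyLink:
    """A link between two parties that enforces strict alternation:
    request, reply, request, reply, ...  Any other order is an error."""
    def __init__(self, client, server):
        self.parties = (client, server)  # remembered for the link's whole life
        self.state = "idle"              # the link's own memory

    def send(self, sender, kind):
        if kind == "request" and sender is self.parties[0] and self.state == "idle":
            self.state = "awaiting_reply"
        elif kind == "reply" and sender is self.parties[1] and self.state == "awaiting_reply":
            self.state = "idle"
        else:
            raise ProtocolError(f"{kind!r} not allowed in state {self.state!r}")

client, server = object(), object()
link = RequestReplyLink(client, server)
link.send(client, "request")
link.send(server, "reply")       # correct alternation passes
try:
    link.send(server, "reply")   # a reply with no pending request
except ProtocolError:
    violation_detected = True
assert violation_detected
```

The sketch illustrates the review's claim concretely: the link is an object with memory and behavior, and its lifecycle (the state machine) is exactly what a methodology would have to teach designers to get right.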
In systems consisting of several dozen object classes or more, living a dynamically rich life (instances are created and destroyed often), direct steady links from instance to instance are definitely not a sufficient means of communication. Complex, compound, long-distance temporary connections must be created, run and destroyed dynamically. The system of objects is, indeed, a regular communication network. Considering only direct connections from object to object and their communication protocols covers just the link layer of the network. But the network, transport, session, presentation and application layers, and all the respective protocols, should be designed in the system as well (in accord with the ISO/OSI network architecture, or with the analogous layers and protocols of another network architecture). Otherwise, although the link or even network communication protocol may be designed correctly, the system may not work well, because it may crash, e.g., at the session layer.
The new development of second-generation OMT states some constraints on functional modeling. One of them is that only pure functions and pure values are allowed in Object-Oriented Data Flow Diagrams (OODFDs). This makes the functional model similar to a logic diagram: any dynamics (and thus sequencing) is involved only implicitly through the data flows (data must, of course, be produced before being consumed). The author considers iterations and conditionals to be forms of control and recommends avoiding them. I see no reason to avoid conditionals, since a conditional can be treated as a pure function of its inputs (LISP's cond, for instance, is referentially transparent, even though it is implemented as a special form rather than a regular function). Iterations can be avoided easily, since any iteration can be transformed into recursion (as in LISP) or into a loopback (as in a logic diagram), and neither of those is forbidden in OODFDs. Or should some additional constraints be stated for OODFDs? In any case, the problems mentioned in this review remain unsolved.
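Both escape routes mentioned above are easy to demonstrate: a conditional as an ordinary pure function of its inputs, and a loop rewritten as recursion. The sketch is in Python rather than LISP, and note that this cond, unlike LISP's special form, evaluates both branches before choosing (which is exactly why LISP makes it a special form):

```python
def cond(test, then_value, else_value):
    """A conditional as a pure function: the output depends only on the
    inputs.  Both branch values are evaluated before the choice."""
    return then_value if test else else_value

def total(items):
    """Iteration rewritten as recursion: no loop construct, no mutable
    counter, yet the same repeated computation."""
    if not items:
        return 0
    return items[0] + total(items[1:])

assert cond(3 > 2, "yes", "no") == "yes"
assert total([1, 2, 3, 4]) == 10   # same result as an iterative sum
```

Since both constructs stay pure, a diagram built only from such nodes still satisfies the "pure functions and pure values" constraint on OODFDs.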
OMT contains much useful advice and many guidelines for software designers and programmers (or implementors). Obviously, there is much experience in the background of the book, and any experience takes time and effort to acquire. It is much easier to learn from other people (i.e., from the authors of OMT) than to gain one's own experience (and pay for one's own mistakes).
The book is extraordinarily sound in dynamic modeling. Nevertheless, the underlying theory of software engineering and computer science is not explained there. Thus, practitioners with a weak theoretical background may not understand it (e.g., race conditions and their proper handling).
Although extensive parts of the book are very well explained and contain much useful information (still, five years after publication!), other (and unfortunately essential) places are at best vague. This applies particularly to the concepts of the object-oriented approach, system specification and object modeling. Additionally, the very applicability of functional modeling is controversial. Obviously, it is easy to say this today; who knew it five years ago? Nonetheless, whoever reads the book today needs this information.
The reader should be well acquainted with computer science (especially with the theory of abstract data types, automata theory and the theory of concurrent processes) before he or she starts reading Object-Oriented Modeling and Design. The vague and inaccurate places in the book may cause a feeling of misunderstanding or disagreement, and may provoke fruitful rethinking or discussion. Conversely, a feeling of common understanding signals misunderstanding, indeed!
Object-Oriented Modeling and Design is still definitely worth reading, though with a proper theoretical background and some criticism.